156 research outputs found

    A current-driven six-channel potentiostat for rapid performance characterization of microbial electrolysis cells

    Get PDF
    Knowledge of the performance of microbial electrolysis cells under a wide range of operating conditions is crucial to achieve high production efficiencies. Characterizing this performance experimentally, however, is challenging due to either the long measurement times of steady-state procedures or the transient errors of dynamic procedures. Moreover, wide parallelization of the measurements is not feasible due to the high measurement equipment cost per channel. Hence, to speed up this characterization and to facilitate low-cost, yet widely parallel measurements, this paper presents a novel rapid polarization curve measurement procedure with a dynamic measurement resolution that runs on a custom six-channel potentiostat with a current-driven topology. As a case study, the procedure is used to rapidly assess the impact of varying pH values on a microbial electrolysis cell that produces H2. A 2x to 12x speedup could be obtained in comparison with the state of the art, depending on the characterization resolution (16-128 levels). On top of this speedup, measurements can be parallelized up to 6x on the presented, affordable $42-per-channel potentiostat.
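    The dynamic measurement resolution can be pictured as a coarse current sweep that is locally refined wherever the polarization curve bends sharply. The sketch below only illustrates that idea; the channel interface (set_current, read_voltage), settle time and refinement threshold are hypothetical and not taken from the paper.

import time

import numpy as np


def measure_point(set_current, read_voltage, i, settle_s=1.0):
    """Apply one current level, let the cell settle, then read its voltage."""
    set_current(i)
    time.sleep(settle_s)
    return read_voltage()


def polarization_curve(set_current, read_voltage, i_min, i_max,
                       coarse_steps=16, max_points=128, dv_threshold=0.05):
    """Coarse sweep first, then add points where the voltage changes steeply."""
    points = [(i, measure_point(set_current, read_voltage, i))
              for i in np.linspace(i_min, i_max, coarse_steps)]
    while len(points) < max_points:
        # Find the steepest voltage jump between neighbouring points.
        dv, k = max((abs(v2 - v1), k)
                    for k, ((_, v1), (_, v2)) in enumerate(zip(points, points[1:])))
        if dv <= dv_threshold:
            break                       # curve is smooth enough everywhere
        i_mid = 0.5 * (points[k][0] + points[k + 1][0])
        points.insert(k + 1, (i_mid, measure_point(set_current, read_voltage, i_mid)))
    return points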

    A 0.3-2.6 TOPS/W Precision-Scalable Processor for Real-Time Large-Scale ConvNets

    Get PDF
    A low-power precision-scalable processor for convolutional neural networks (ConvNets or CNNs) is implemented in a 40 nm technology. Its 256 parallel processing units achieve a peak 102 GOPS running at 204 MHz. To minimize energy consumption while maintaining throughput, this work is the first to both exploit the sparsity of convolutions and implement dynamic precision scalability, enabling supply- and energy-scaling. The processor is fully C-programmable, consumes 25-288 mW at 204 MHz and scales efficiency from 0.3 to 2.6 real TOPS/W. The system hereby outperforms the state of the art by up to 3.9x in energy efficiency. Comment: Published at the Symposium on VLSI Circuits, 2016, Honolulu, HI, USA.
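    As a software analogue (not the chip's hardware) of the two techniques the processor combines, the sketch below quantizes operands to a configurable bit width and skips multiply-accumulates on zero-valued inputs; the function names and bit widths are illustrative assumptions.

import numpy as np


def quantize(x, bits):
    """Uniform symmetric quantization of x to the given number of bits."""
    x = np.asarray(x, dtype=float)
    scale = (2 ** (bits - 1) - 1) / (np.max(np.abs(x)) + 1e-12)
    return np.round(x * scale) / scale


def sparse_conv1d(activations, weights, precision_bits=4):
    """1-D convolution that quantizes operands and skips zero activations,
    mimicking a precision-scalable MAC array with sparsity guarding."""
    a = quantize(activations, precision_bits)
    w = quantize(weights, precision_bits)
    out = np.zeros(len(a) - len(w) + 1)
    macs_done = 0
    for o in range(len(out)):
        acc = 0.0
        for k, wk in enumerate(w):
            if a[o + k] == 0.0:         # guard: a zero input triggers no MAC
                continue
            acc += a[o + k] * wk
            macs_done += 1
        out[o] = acc
    return out, macs_done


# Example: a ReLU-sparse input where roughly half the activations are zero.
acts = np.maximum(0, np.random.default_rng(1).standard_normal(64))
out, macs = sparse_conv1d(acts, np.ones(3) / 3)
print(f"MACs executed: {macs} of {3 * len(out)} possible")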

    A 64-channel, 1.1-pA-accurate on-chip potentiostat for parallel electrochemical monitoring

    Get PDF
    Electrochemical monitoring is crucial for industrial applications, such as microbial electrolysis and corrosion monitoring, as well as for consumer applications such as personal health monitoring. Yet, state-of-the-art integrated potentiostat monitoring devices have few parallel channels and limited flexibility due to their channel architecture. This work presents a novel, widely scalable channel architecture using a switched-capacitor-based Howland current pump and a digital potential controller. An integrated, 64-channel CMOS potentiostat array has been fabricated. Each individual channel has a dynamic current range of 120 dB with 1.1 pA precision and up to 100 kHz bandwidth. The on-chip working electrodes are post-processed with gold to ensure (bio)electrochemical compatibility.
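    As rough intuition for how a current-driven channel can still regulate an electrode potential, the sketch below shows a generic digital PI loop that adjusts an injected current until the measured potential matches a setpoint. It is only an illustration with made-up gains and a resistive toy cell, not the chip's actual switched-capacitor controller.

def potential_control_step(v_measured, v_setpoint, state, kp=5e-5, ki=1e-5):
    """One PI iteration: returns the current (A) to inject into the cell."""
    error = v_setpoint - v_measured
    state["integral"] += error
    return kp * error + ki * state["integral"]


# Toy closed loop with a purely resistive cell of 10 kOhm (hypothetical values).
state = {"integral": 0.0}
v_cell, r_cell, v_target = 0.0, 10e3, 0.5
for _ in range(200):
    i_inject = potential_control_step(v_cell, v_target, state)
    v_cell = i_inject * r_cell          # the cell's potential follows the current
print(f"settled electrode potential: {v_cell:.3f} V")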

    Understanding interdependency through complex information sharing

    Full text link
    The interactions between three or more random variables are often nontrivial, poorly understood and yet paramount for future advances in fields such as network information theory, neuroscience, genetics and many others. In this work, we propose to analyze these interactions as different modes of information sharing. Towards this end, we introduce a novel axiomatic framework for decomposing the joint entropy, which characterizes the various ways in which random variables can share information. The key contribution of our framework is to distinguish between interdependencies where the information is shared redundantly, and synergistic interdependencies where the sharing structure exists in the whole but not between the parts. We show that our axioms determine unique formulas for all the terms of the proposed decomposition for a number of cases of interest. Moreover, we show how these results can be applied to several network information theory problems, providing a more intuitive understanding of their fundamental limits. Comment: 39 pages, 4 figures.
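    A worked toy example of the redundancy/synergy distinction (the standard XOR case, not the paper's own decomposition): for Z = X XOR Y with independent uniform bits, each variable alone reveals nothing about Z, while the pair determines it completely, so the sharing is purely synergistic.

from collections import Counter
from itertools import product
from math import log2


def mutual_information(pairs):
    """I(A;B) in bits from a list of equally likely (a, b) samples."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum(c / n * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())


samples = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]
print("I(X;Z)   =", mutual_information([(x, z) for x, y, z in samples]))      # 0 bits
print("I(Y;Z)   =", mutual_information([(y, z) for x, y, z in samples]))      # 0 bits
print("I(X,Y;Z) =", mutual_information([((x, y), z) for x, y, z in samples])) # 1 bit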

    Benchmarking and modeling of analog and digital SRAM in-memory computing architectures

    Full text link
    In-memory computing is emerging as an efficient hardware paradigm for deep neural network accelerators at the edge, breaking the memory wall and exploiting massive computational parallelism. Two design models have emerged: analog in-memory computing (AIMC) and digital in-memory computing (DIMC), each offering a different design space in terms of accuracy, efficiency and dataflow flexibility. This paper targets a fair comparison and benchmarking of both approaches to guide future designs, through (a) an overview of published architectures; (b) an analytical cost model for energy and throughput; and (c) scheduling of workloads on a variety of modeled IMC architectures for end-to-end network efficiency analysis, offering valuable workload-hardware co-design insights.
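    To make the idea of an analytical cost model concrete, the sketch below accumulates energy per MAC and per operand fetch and derives latency from array size and clock frequency. It is a deliberately simplified stand-in, not the paper's model; all names and energy numbers are placeholder assumptions.

from dataclasses import dataclass


@dataclass
class IMCArray:
    rows: int            # e.g. input-channel dimension mapped spatially
    cols: int            # e.g. output-channel dimension mapped spatially
    f_clk_hz: float
    e_mac_j: float       # energy per MAC (differs between AIMC and DIMC)
    e_acc_j: float       # energy per operand fetched from the local buffer


def layer_cost(arr, macs, input_fetches, weight_fetches):
    """Return (energy in J, latency in s) for one layer on one IMC array."""
    utilization = 1.0                       # assume a perfectly mapped layer
    ops_per_cycle = arr.rows * arr.cols * utilization
    latency = macs / (ops_per_cycle * arr.f_clk_hz)
    energy = macs * arr.e_mac_j + (input_fetches + weight_fetches) * arr.e_acc_j
    return energy, latency


# Example: a 3x3 conv layer, 64 in / 64 out channels, 56x56 outputs, 64x64 array.
aimc = IMCArray(rows=64, cols=64, f_clk_hz=200e6, e_mac_j=5e-15, e_acc_j=50e-15)
macs = 3 * 3 * 64 * 64 * 56 * 56
energy, latency = layer_cost(aimc, macs,
                             input_fetches=3 * 3 * 64 * 56 * 56,
                             weight_fetches=3 * 3 * 64 * 64)
print(f"energy ~ {energy * 1e6:.2f} uJ, latency ~ {latency * 1e3:.2f} ms")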

    An affordable multichannel potentiostat with 128 individual stimulation and sensing channels

    Get PDF
    (Bio)electrochemical reactions are a promising, environmentally friendly alternative to many chemical processes. These processes, however, are known to be slow, to depend strongly on the environment and to vary between samples. This necessitates research into the optimal operating conditions of (bio)electrochemical cells. Yet, current experiments have to rely on slow, sequential tests. To overcome this, this work proposes a potentiostat with 128 parallel channels to speed up research experiments. The 128-channel potentiostat makes extensive use of time-sharing and is implemented in PCB technology, resulting in a cost per channel of only $5, 4x lower than the state of the art (SotA), and an area per channel of ≈93 mm², 5x lower than the SotA. Real-time digital compensation of each individual channel is used to obtain a channel-to-channel mismatch below 1%. A cyclic voltammetry experiment on all channels simultaneously illustrates the low channel-to-channel mismatch. A chronoamperometry experiment with 128 different potential steps in parallel illustrates the 128x experiment speedup.
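    The per-channel digital compensation can be thought of as a gain/offset correction derived from a two-point calibration against a known reference. The sketch below only illustrates that principle with made-up error magnitudes; it is not the instrument's firmware.

import numpy as np


def calibrate(raw_low, raw_high, ref_low, ref_high):
    """Derive (gain, offset) so that gain * raw + offset reproduces the reference."""
    gain = (ref_high - ref_low) / (raw_high - raw_low)
    offset = ref_low - gain * raw_low
    return gain, offset


def compensate(raw, gain, offset):
    return gain * raw + offset


# Toy example: 128 channels with up to +/-5 % gain error and a small offset.
rng = np.random.default_rng(0)
true_current = 1e-6                                    # 1 uA applied to every channel
gain_err = 1 + rng.uniform(-0.05, 0.05, 128)
offset_err = rng.uniform(-5e-9, 5e-9, 128)
raw = gain_err * true_current + offset_err             # uncompensated readings

# Two-point calibration: the raw reading at 0 A is the offset, at 2 uA it is g*2e-6 + o.
cal = [calibrate(o, g * 2e-6 + o, 0.0, 2e-6) for g, o in zip(gain_err, offset_err)]
corrected = np.array([compensate(r, g, o) for r, (g, o) in zip(raw, cal)])
print("relative mismatch before:", np.std(raw) / np.mean(raw))
print("relative mismatch after: ", np.std(corrected) / np.mean(corrected))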

    A Review on Internet of Things Solutions for Intelligent Energy Control in Buildings for Smart City Applications

    Get PDF
    A smart city exploits sustainable information and communication technologies to improve the quality and performance of urban services for citizens and government while reducing resource consumption. Intelligent energy control in buildings is an important aspect of this. The Internet of Things (IoT) can provide a solution: it aims to connect numerous heterogeneous devices through the internet, which requires a flexible, layered architecture in which things, people and cloud services are combined to carry out an application task. Such a flexible, hierarchical IoT architecture model is introduced in this paper, with an overview of each key component for intelligent energy control in buildings for smart cities.

    DeFiNES: Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators through Analytical Modeling

    Full text link
    DNN workloads can be scheduled onto DNN accelerators in many different ways: from layer-by-layer scheduling to cross-layer depth-first scheduling (a.k.a. layer fusion, or cascaded execution). This results in a very broad scheduling space, with each schedule leading to different hardware (HW) costs in terms of energy and latency. To rapidly explore this vast space for a wide variety of hardware architectures, analytical cost models are crucial for estimating scheduling effects at the HW level. However, state-of-the-art cost models lack support for exploring the complete depth-first scheduling space, for instance focusing only on activations while ignoring weights, or modeling only DRAM accesses while overlooking on-chip data movements. These limitations prevent researchers from systematically and accurately understanding the depth-first scheduling space. After formalizing this design space, this work proposes a unified modeling framework, DeFiNES, for layer-by-layer and depth-first scheduling to fill in the gaps. DeFiNES enables analytically estimating the hardware cost of possible schedules in terms of both energy and latency, while considering data accesses at every memory level. This is done for each schedule and HW architecture under study by optimally choosing the active part of the memory hierarchy per unique combination of operand, layer, and feature map tile. The hardware costs are estimated taking into account both data computation and data copy phases. The analytical cost model is validated against measured data from a taped-out depth-first DNN accelerator, DepFiN, showing good modeling accuracy at the end-to-end neural network level. A comparison with a generalized state-of-the-art approach demonstrates up to 10x better solutions found with DeFiNES. Comment: Accepted by HPCA 2023.
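    To make the layer-by-layer versus depth-first trade-off concrete, the sketch below counts DRAM traffic for intermediate feature maps under the two extreme schedules. It is a deliberately crude illustration with assumed sizes and a single on-chip buffer, not the DeFiNES cost model, which additionally covers weights, every memory level, and compute/copy phases.

def layer_by_layer_dram_bytes(feature_map_bytes):
    """Every intermediate map is spilled to DRAM and read back by the next layer."""
    traffic = feature_map_bytes[0]                      # read network input
    for fm in feature_map_bytes[1:-1]:
        traffic += 2 * fm                               # write, then read back
    traffic += feature_map_bytes[-1]                    # write network output
    return traffic


def depth_first_dram_bytes(feature_map_bytes, tile_fraction, sram_bytes):
    """Process the network tile by tile; intermediates stay on chip if they fit."""
    traffic = feature_map_bytes[0] + feature_map_bytes[-1]
    for fm in feature_map_bytes[1:-1]:
        if fm * tile_fraction > sram_bytes:             # tile too large: spill anyway
            traffic += 2 * fm
    return traffic


# Hypothetical per-layer feature-map sizes in bytes for a small network.
fms = [3 * 224 * 224, 64 * 224 * 224, 64 * 112 * 112, 128 * 112 * 112, 1000]
print("layer-by-layer DRAM bytes:", layer_by_layer_dram_bytes(fms))
print("depth-first    DRAM bytes:", depth_first_dram_bytes(fms, tile_fraction=1 / 16,
                                                           sram_bytes=512 * 1024))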